Discussion of “High-dimensional autocovariance matrices and optimal linear prediction”
Abstract
First, we would like to congratulate Prof. McMurry and Prof. Politis on their thought-provoking paper on optimal linear prediction based on the full time-series sample (hereafter referred to as [MP15]). [MP15] considered the one-step optimal linear predictor X*ₙ₊₁ = ∑ⁿᵢ₌₁ φᵢ(n)Xₙ₊₁₋ᵢ of a univariate time series X₁, . . . , Xₙ in the ℓ² sense, which is given by the solution of the Yule-Walker equations φ(n) = Γₙ⁻¹γ(n).
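As a minimal numerical sketch (not the estimator of [MP15]), the Yule-Walker solution φ = Γ⁻¹γ can be computed from sample autocovariances; here the AR(1) coefficient 0.6, the sample size n = 200, and the truncation order p = 5 are all illustrative assumptions:

```python
import numpy as np

# Hypothetical illustration: one-step linear prediction via the
# Yule-Walker equations phi = Gamma^{-1} gamma, with the order
# truncated to p for a stable plain-vanilla fit.

rng = np.random.default_rng(0)
n = 200
x = np.zeros(n)
for t in range(1, n):  # simulate AR(1): X_t = 0.6 X_{t-1} + e_t
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()

xc = x - x.mean()
p = 5
# sample autocovariances gamma_hat(k), k = 0..p
gamma = np.array([xc[: n - k] @ xc[k:] / n for k in range(p + 1)])

# Gamma_p is Toeplitz with (i, j) entry gamma_hat(|i - j|)
lags = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
Gamma = gamma[lags]

phi = np.linalg.solve(Gamma, gamma[1 : p + 1])  # Yule-Walker solution

# one-step predictor X*_{n+1} = sum_i phi_i X_{n+1-i} (mean added back)
x_next = phi @ xc[::-1][:p] + x.mean()
```

With this seed the leading coefficient φ₁ lands near the true value 0.6, while the higher-order coefficients stay close to zero, as expected for an AR(1) truth.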
Similar resources
High-dimensional autocovariance matrices and optimal linear prediction
A new methodology for optimal linear prediction of a stationary time series is introduced. Given a sample X₁, . . . , Xₙ, the optimal linear predictor of Xₙ₊₁ is X̃ₙ₊₁ = φ₁(n)Xₙ + φ₂(n)Xₙ₋₁ + . . . + φₙ(n)X₁. In practice, the coefficient vector φ(n) ≡ (φ₁(n), φ₂(n), . . . , φₙ(n)) is routinely truncated to its first p components in order to be consistently estimated. By contrast, we employ a consi...
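The consistent full-vector approach alluded to above rests on regularizing the autocovariance matrix. A hypothetical sketch using a hard truncation at lag l (the flat-top tapering and bandwidth selection of the actual methodology are not reproduced here; l = 10 and the MA(2) model are illustrative assumptions):

```python
import numpy as np

# Hypothetical sketch of a banded autocovariance-matrix estimate:
# keep sample autocovariances only within a band |i - j| <= l.

rng = np.random.default_rng(1)
n = 300
# simulate an MA(2) process: X_t = e_t + 0.5 e_{t-1} + 0.25 e_{t-2}
x = np.convolve(rng.standard_normal(n + 2), [1.0, 0.5, 0.25])[2 : n + 2]

xc = x - x.mean()
gamma_hat = np.array([xc[: n - k] @ xc[k:] / n for k in range(n)])

l = 10  # band width: autocovariances beyond lag l are set to zero
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Gamma_banded = np.where(lags <= l, gamma_hat[lags], 0.0)

# The banded matrix is symmetric by construction, but it need not be
# positive definite, which is why a positive-definite correction is
# also part of the discussion.
```

The banding suppresses the wild excursions of the sample autocovariance at high lags, at the cost of a matrix that may require a positive-definite adjustment before inversion.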
REJOINDER: High-dimensional autocovariance matrices and optimal linear prediction
We would like to sincerely thank all discussants for their kind remarks and insightful comments. To start with, we wholeheartedly welcome the proposal of Rob Hyndman for a “better acf” plot based on our vector estimator γ̂∗(n) from Section 3.2. As mentioned, the sample autocovariance is not a good estimate for the vector γ(n), and this is especially apparent in the wild excursions it takes at hi...
Inverse moment bounds for sample autocovariance matrices based on detrended time series and their applications
In this paper, we assume that observations are generated by a linear regression model with short- or long-memory dependent errors. We establish inverse moment bounds for kn-dimensional sample autocovariance matrices based on the least squares residuals (also known as the detrended time series), where kn ≪ n, kn → ∞ and n is the sample size. These results are then used to derive the mean-square erro...
Modified Burg Algorithms for Multivariate Subset Autoregression
Lattice algorithms for estimating the parameters of a multivariate autoregression are generalized to deal with subset models in which some of the coefficient matrices are constrained to be zero. We first establish a recursive prediction-error version of the empirical Yule-Walker equations. The estimated coefficient matrices obtained from these recursions are the coefficients of the best linear ...
New Optimal Observer Design Based on State Prediction for a Class of Non-linear Systems Through Approximation
This paper deals with the optimal state observer of non-linear systems based on a new strategy. Despite the development of state prediction in linear systems, state prediction for non-linear systems is still challenging. In this paper, to obtain a future estimation of the system states, initially Taylor series expansion of states in their receding horizons was achieved to any specified order an...
Publication date: 2015